Officials are worried sensitive data might leak.

Congress Reportedly Restricting Staff Use of AI Models Such as ChatGPT

According to Axios, the use of ChatGPT and comparable generative AI tools is subject to strict restrictions in Congress. A memo from Catherine Szpindor, the House of Representatives' Chief Administrative Officer, obtained by the news outlet, outlines the specific circumstances under which ChatGPT and other large language models may be used in congressional offices. Staff members are only permitted to use the paid ChatGPT Plus service, which has stronger privacy controls, for "research and evaluation" purposes, not as a routine part of their work.

House offices are only allowed to use the chatbot with publicly available information, even on the Plus tier, Szpindor adds. Privacy features must be enabled manually to prevent interactions from feeding data back into the AI model. ChatGPT's free tier is not allowed at all, and neither are other large language models.

We have asked the House for comment and will let you know if we hear back. The restrictions would not be surprising, however. Institutions and companies have warned against using generative AI over possible leaks and misuse. Republicans were criticized for running an AI-generated attack ad, for example, while Samsung employees reportedly leaked sensitive information by using ChatGPT at work. Schools have banned these systems over cheating concerns. The House restrictions could, in theory, head off similar problems, such as AI-written legislation and speeches.

There may not be much opposition to the House's policy. Both chambers of Congress are trying to regulate and otherwise rein in AI. Rep. Ritchie Torres introduced a House bill that would require disclaimers on content made with generative AI, while Rep. Yvette Clarke wants similar disclosures on political ads. Senators have held hearings on artificial intelligence and introduced a bill that would hold AI developers accountable for harmful content produced on their platforms.
